16 research outputs found

    Learning Probabilistic Systems from Tree Samples

    We consider the problem of learning a non-deterministic probabilistic system consistent with a given finite set of positive and negative tree samples. Consistency is defined with respect to strong simulation conformance. We propose learning algorithms that use traditional and a new "stochastic" state-space partitioning, the latter resulting in the minimum number of states. We then use them to solve the problem of "active learning", which uses a knowledgeable teacher to generate samples as counterexamples to simulation equivalence queries. We show that the problem is undecidable in general, but that it becomes decidable under a suitable condition on the teacher, one that arises naturally from the way samples are generated from failed simulation checks. The latter problem is shown to be undecidable if we additionally require the learner to always conjecture a "minimum state" hypothesis; we therefore propose a semi-algorithm using stochastic partitions. Finally, we apply the proposed (semi-)algorithms to infer intermediate assumptions in an automated assume-guarantee verification framework for probabilistic systems.
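    The active-learning loop described here can be pictured as a simple teacher/learner protocol. The following sketch is a minimal, hypothetical skeleton (the teacher's query interface and all names are ours, not the paper's): the learner builds a hypothesis consistent with the current tree samples, and the teacher either confirms simulation equivalence or returns a tree counterexample that is added to the samples. Like the paper's stochastic-partition procedure, such a loop is a semi-algorithm and need not terminate.

        # Hypothetical sketch of the active-learning loop; `teacher` answers
        # simulation-equivalence queries and returns tree counterexamples
        # extracted from failed simulation checks.
        def active_learn(teacher, conjecture):
            positives, negatives = set(), set()
            while True:
                # Build a hypothesis consistent with the current samples,
                # e.g. via traditional or "stochastic" state-space partitioning.
                hypothesis = conjecture(positives, negatives)
                result = teacher.query(hypothesis)
                if result.equivalent:
                    return hypothesis
                # Which sample set grows depends on the direction in which
                # the simulation check failed.
                if result.hypothesis_too_permissive:
                    negatives.add(result.counterexample)
                else:
                    positives.add(result.counterexample)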

    The SeaHorn Verification Framework

    In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design, which separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state of the art in software model checking and abstract interpretation for verification, and (d) uses Horn clauses as an intermediate language to represent verification conditions, which simplifies interfacing with multiple verification tools based on Horn clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
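    As an illustration of the Horn-clause intermediate representation (a toy example of ours, not SeaHorn's actual encoding), the sketch below uses Z3's Fixedpoint/Spacer engine to express the verification condition of the loop x = 0; while (x < 10) x++; assert(x <= 10); as constrained Horn clauses, with inv standing for the unknown loop invariant.

        # A toy verification condition as constrained Horn clauses (CHCs),
        # solved with Z3's Spacer engine; illustrative only, not SeaHorn's
        # actual encoding.
        from z3 import Ints, Function, IntSort, BoolSort, Fixedpoint, unsat

        x, x1 = Ints('x x1')
        inv = Function('inv', IntSort(), BoolSort())   # unknown loop invariant
        err = Function('err', BoolSort())              # error-reachability flag

        fp = Fixedpoint()
        fp.set(engine='spacer')
        fp.register_relation(inv, err)
        fp.declare_var(x, x1)

        fp.rule(inv(0))                                  # x = 0 on entry
        fp.rule(inv(x1), [inv(x), x < 10, x1 == x + 1])  # while (x < 10) x++
        fp.rule(err(), [inv(x), x > 10])                 # assert(x <= 10) fails

        # unsat means err() is underivable, i.e. the assertion always holds.
        print("safe" if fp.query(err()) == unsat else "possible bug")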

    Differentially Testing Soundness and Precision of Program Analyzers

    In the last decades, numerous program analyzers have been developed by both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing the soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
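    A differential-testing harness of this kind boils down to a small driver: generate a program with a known verdict (for instance, a seeded assertion that is provably reachable or unreachable), run each analyzer on it, and flag disagreements with the ground truth. The sketch below is hypothetical; the analyzer commands and the output parsing are placeholders, not the paper's tooling.

        # Hypothetical differential-testing driver; the analyzer CLIs and the
        # output parsing are placeholders, not the paper's actual tooling.
        import subprocess

        ANALYZERS = {
            "analyzer-a": ["analyzer-a", "--check-assertions"],
            "analyzer-b": ["analyzer-b", "--assertions"],
        }

        def verdict(cmd, program_path):
            out = subprocess.run(cmd + [program_path], capture_output=True,
                                 text=True, timeout=60).stdout
            return "verified" if "SAFE" in out else "alarm"  # placeholder parsing

        def differential_test(program_path, ground_truth):
            """ground_truth: 'safe' iff the seeded assertion can never fail."""
            for name, cmd in ANALYZERS.items():
                v = verdict(cmd, program_path)
                if ground_truth == "unsafe" and v == "verified":
                    print(f"{name}: soundness issue (missed a real bug)")
                elif ground_truth == "safe" and v == "alarm":
                    print(f"{name}: precision issue (false alarm)")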

    Analysis and verification of the HMGB1 signaling pathway

    Background: Recent studies have found that overexpression of the high-mobility group box-1 (HMGB1) protein, in conjunction with its receptors for advanced glycation end products (RAGE) and Toll-like receptors (TLRs), is associated with the proliferation of various cancer types, including breast and pancreatic cancer.

    Results: We have developed a rule-based model of the crosstalk between the HMGB1 signaling pathway and other key cancer signaling pathways. The model has been simulated using both ordinary differential equations (ODEs) and discrete stochastic simulation. We have applied an automated verification technique, statistical model checking, to validate interesting temporal properties of our model.

    Conclusions: Our simulations show that if HMGB1 is overexpressed, then the oncoproteins Cyclin D/E, which regulate cell proliferation, are overexpressed, while tumor suppressor proteins that regulate apoptosis (programmed cell death), such as p53, are repressed. The discrete stochastic simulations show that p53 and MDM2 oscillations continue even after 10 hours, as observed in experiments; this property is not exhibited by the deterministic ODE simulation for the chosen parameters. Moreover, the models predict that mutations of RAS, ARF and P21 in the context of HMGB1 signaling can influence the cancer cell's fate, apoptosis or survival, through the crosstalk of different pathways.
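    The qualitative gap between the two simulation regimes, sustained oscillations under the stochastic semantics that the deterministic ODEs do not show for these parameters, can be reproduced at toy scale with Gillespie's stochastic simulation algorithm (SSA). The sketch below is a generic SSA loop applied to a p53/MDM2-style negative-feedback toy model; the species, rate constants, and stoichiometry are illustrative placeholders, not the parameters of the paper's HMGB1 model.

        # Generic Gillespie SSA over a toy p53/MDM2-style negative-feedback
        # loop; rates and stoichiometry are illustrative, not the paper's model.
        import random

        def gillespie(state, reactions, t_end):
            t, trace = 0.0, [(0.0, dict(state))]
            while t < t_end:
                propensities = [rate(state) for rate, _ in reactions]
                total = sum(propensities)
                if total == 0:
                    break
                t += random.expovariate(total)   # time to the next event
                r = random.uniform(0, total)     # choose a reaction by propensity
                for p, (_, fire) in zip(propensities, reactions):
                    r -= p
                    if r <= 0:
                        fire(state)
                        break
                trace.append((t, dict(state)))
            return trace

        def inc(k): return lambda s: s.update({k: s[k] + 1})
        def dec(k): return lambda s: s.update({k: max(0, s[k] - 1)})

        reactions = [
            (lambda s: 1.0,                          inc("p53")),   # p53 production
            (lambda s: 0.002 * s["p53"] * s["MDM2"], dec("p53")),   # MDM2-mediated degradation
            (lambda s: 0.03 * s["p53"],              inc("MDM2")),  # p53-induced MDM2
            (lambda s: 0.02 * s["MDM2"],             dec("MDM2")),  # MDM2 decay
        ]
        trace = gillespie({"p53": 50, "MDM2": 20}, reactions, t_end=600.0)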

    Exploring Polygonal Environments by Simple Robots with Faulty Combinatorial Vision

    We study robustness issues of basic exploration tasks for simple robots inside a polygon P when the sensors provide possibly faulty information about the unlabelled environment P. Ideally, the simple robot we consider is able to sense the number and the order of visible vertices, and can move to any such visible vertex. Additionally, the robot senses whether two visible vertices form an edge of P. We call this sensing combinatorial vision. The robot can use pebbles to mark vertices: if there is a visible vertex with a pebble, the robot knows (senses) the index of this vertex in the counterclockwise list of visible vertices. It has been shown [1] that such a simple robot, using one pebble, can virtually label the visible vertices with their global indices and navigate consistently in P. This allows, for example, computing the map or a triangulation of P. In this paper we revisit some of these computational tasks in a faulty environment, modeling situations where the sensors "see" two visible vertices as one vertex. In such a situation, we show that a simple robot with one pebble cannot even compute the number of vertices of P, and we conjecture (and discuss) that this is not possible with two pebbles either. We then present an algorithm that uses three pebbles of two types and allows the simple robot to count the vertices of P. Using this algorithm as a subroutine, we present algorithms that reconstruct the map of P, as well as the correct visibility at every vertex of P.
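    The sensing model above can be made concrete as a small robot interface. The sketch below is our own hypothetical rendering (all method names are ours): the robot only ever observes the count and counterclockwise order of visible vertices, edge flags between pairs of them, and the local indices of pebbles, and a fault may merge two visible vertices into one.

        # Hypothetical interface for a simple robot with combinatorial vision;
        # all method names are ours, mirroring the model in the abstract.
        from abc import ABC, abstractmethod

        class CombinatorialVisionRobot(ABC):
            @abstractmethod
            def visible_count(self) -> int:
                """Number of visible vertices; under a fault, two visible
                vertices may be sensed as one."""

            @abstractmethod
            def is_edge(self, i: int, j: int) -> bool:
                """Whether visible vertices i and j form an edge of the polygon."""

            @abstractmethod
            def pebble_indices(self) -> list[int]:
                """Counterclockwise local indices of visible vertices
                currently carrying a pebble."""

            @abstractmethod
            def move_to(self, i: int) -> None:
                """Move to the i-th visible vertex in counterclockwise order."""

            @abstractmethod
            def drop_pebble(self, kind: int) -> None:
                """Drop a pebble of the given type (the counting algorithm
                uses three pebbles of two types) at the current vertex."""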

    Connectivity preserving transformations for higher dimensional binary images

    An N-dimensional digital binary image I is a function I : Z^N -> {0, 1}. I is (B_{3^N-1}, W_{3^N-1})-connected if and only if its black pixels and its white pixels are each (3^N - 1)-connected; I is B_{3^N-1}-connected if and only if its black pixels are (3^N - 1)-connected. For a 3-D binary image, the respective connectivity models are (B26, W26) and B26. A pair of (3^N - 1)-neighboring opposite-valued pixels in an N-D binary image I is called interchangeable if reversing their values preserves the original connectedness; we call such an interchange a (3^N - 1)-local interchange. Under the above connectivity models, we show that given two binary images of n pixels/voxels each, we can transform one into the other using a sequence of (3^N - 1)-local interchanges. The specific results are as follows. Any two B26-connected 3-dimensional images I and J, each having n black voxels, are transformable using a sequence of O((c1 + c2) n^2) 26-local interchanges, where c1 and c2 are the total numbers of 8-connected components over all 2-dimensional layers of I and J, respectively. We also show bounds on B26 connectivity under a different interchange model, as proposed in [A. Dumitrescu, J. Pach, Pushing squares around, Graphs and Combinatorics 22 (1) (2006) 37-50]. Next, we show that any two simply connected images under the (B26, W26) connectivity model, each having n black voxels, are transformable using a sequence of O(n^2) 26-local interchanges. We generalize this result to show that, for N > 1, any two (B_{3^N-1}, W_{3^N-1})-connected N-dimensional simply connected images, each having n black pixels, are transformable using a sequence of O(N n^2) (3^N - 1)-local interchanges.
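    To make the notion of an interchangeable pair concrete, the 2-D case (N = 2, so 3^2 - 1 = 8 neighbors) can be checked directly: flip an 8-neighboring opposite-valued pair and verify that black and white connectivity are both preserved. The sketch below is our own illustration, not the paper's construction; it tests preservation by comparing the numbers of 8-connected black and white components before and after the flip.

        # Our illustration of a (3^N - 1)-local interchange test for N = 2:
        # flip an 8-neighboring black/white pixel pair and check that both
        # black and white 8-connectivity are preserved.
        def components(img, value):
            """Count 8-connected components of pixels equal to `value`."""
            cells = {(r, c) for r, row in enumerate(img)
                     for c, v in enumerate(row) if v == value}
            seen, count = set(), 0
            for start in cells:
                if start in seen:
                    continue
                count += 1
                stack = [start]
                while stack:
                    r, c = stack.pop()
                    if (r, c) in seen:
                        continue
                    seen.add((r, c))
                    stack.extend((r + dr, c + dc)
                                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                                 if (r + dr, c + dc) in cells)
            return count

        def interchangeable(img, p, q):
            """p and q are 8-neighboring pixels with opposite values."""
            before = (components(img, 1), components(img, 0))
            (pr, pc), (qr, qc) = p, q
            img[pr][pc], img[qr][qc] = img[qr][qc], img[pr][pc]  # flip the pair
            after = (components(img, 1), components(img, 0))
            img[pr][pc], img[qr][qc] = img[qr][qc], img[pr][pc]  # restore
            return before == after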

    Backward reasoning with formal properties: a methodology for bug isolation on simulation traces

    Automated methods for bug localization in hardware designs typically work on the design implementation to root-cause a given bug. This paper presents a novel debugging approach in which, instead of using the design implementation in the debugging process, we perform causal deduction over formal properties scattered across the design to locate the bug. This has two advantages: (a) the reasoning takes place in the property space instead of the state space of the implementation, which enhances scalability, and (b) new properties can be added in hindsight to perform what-if analysis, which is less expensive than modifying the implementation for each alternative. Experimental results demonstrate the scalability of the approach in debugging designs with large property suites.
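    At its core, property-space causal deduction can be pictured as a backward walk over a graph of properties: start from the property observed to fail on the simulation trace and repeatedly move to properties whose violation would explain it, until no deeper explanation exists. The sketch below is our own minimal rendering of that idea, not the paper's procedure; explains and holds_on_trace are hypothetical inputs.

        # Our minimal sketch of backward reasoning over a property graph;
        # explains[p] lists properties whose violation could cause p to fail,
        # and holds_on_trace(q) evaluates property q on the simulation trace.
        from collections import deque

        def localize(failed_property, explains, holds_on_trace):
            """Walk backward from the observed failure to root-cause candidates."""
            queue, seen, roots = deque([failed_property]), set(), []
            while queue:
                p = queue.popleft()
                if p in seen:
                    continue
                seen.add(p)
                # Candidate causes: upstream properties that also fail on the trace.
                causes = [q for q in explains.get(p, []) if not holds_on_trace(q)]
                if causes:
                    queue.extend(causes)
                else:
                    roots.append(p)   # no deeper explanation: likely bug location
            return roots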

    Assumption Generation for Asynchronous Systems by Abstraction Refinement

    Compositional verification provides a way of deducing properties of a complete program from properties of its constituents. In particular, the assume-guarantee style of reasoning splits a specification into assumptions and guarantees according to a given inference rule, and generating the assumptions through machine learning makes automatic reasoning possible. However, existing work focuses purely on the synchronous parallel composition of Labeled Transition Systems (LTSs) or Kripke structures, while it is more natural to model real software programs in an asynchronous framework. In this paper, shared variable structures are used as system models, and the asynchronous parallel composition of shared variable structures is defined. Based on a new simulation relation introduced in this paper, we prove that an inference rule that has been widely used in the literature holds for asynchronous systems as long as the components' alphabets satisfy certain conditions. An automated assumption generation approach is then proposed, based on counterexample-guided abstraction refinement (CEGAR) rather than on learning algorithms. Experimental results are provided to demonstrate the effectiveness of the proposed approach. © 2013 Springer-Verlag
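    A classical instance of such an inference rule states that M1 || A |= P and M2 |= A together imply M1 || M2 |= P, which turns assumption generation into a refinement loop over abstractions of one component. The sketch below is a hypothetical skeleton of a CEGAR-based loop of this shape (all helper names are ours and are passed in as parameters): start from a coarse abstraction of M2 as the assumption and refine it whenever a premise fails spuriously.

        # Hypothetical skeleton of CEGAR-based assumption generation for the
        # assume-guarantee rule:  M1 || A |= P  and  M2 |= A  =>  M1 || M2 |= P.
        # All helpers are parameters; a model checker for shared variable
        # structures would instantiate them.
        class VerificationFailure(Exception):
            pass

        def generate_assumption(m2, abstract, check_premise1, check_premise2,
                                refine, is_real_counterexample):
            assumption = abstract(m2)       # coarse initial abstraction of M2
            while True:
                cex = check_premise1(assumption)       # does M1 || A satisfy P?
                if cex is None:
                    cex2 = check_premise2(assumption)  # does M2 conform to A?
                    if cex2 is None:
                        return assumption   # both premises hold: P is proved
                    assumption = refine(assumption, cex2)
                elif is_real_counterexample(cex):
                    raise VerificationFailure(cex)     # genuine violation of P
                else:
                    assumption = refine(assumption, cex)  # spurious: refine A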